22-Feb-87 1150 JMC consciousness
To: ailist@SRI-STRIPE.ARPA
I hope this is the correct address for submissions, since it's
in the reply-to field of the messages.
This discussion of consciousness considers AI as a branch of
computer science rather than as a branch of biology or philosophy.
Therefore, it concerns why it is necessary to provide AI programs
with something like human consciousness in order that they should
behave intelligently in certain situations important for their
utility. Of course, human consciousness presumably has accidental
features that there would be no reason to imitate and other features
that are perhaps necessary consequences of its having evolved that
aren't necessary in programs designed from scratch. However, since
we don't yet understand AI very well, we shouldn't jump to conclusions
about what features of consciousness are unnecessary in order to
have the intellectual capabilities humans have and that we want our
programs to have.
Consciousness has many aspects; here are some of them.
1. We think about our bodies as physical objects to which
the same physical laws apply as apply to other physical objects.
This permits us to predict the behavior of our bodies in certain
situations, e.g. what might break them, and also permits us to
predict the behavior of other physical objects, e.g. we expect
them to have similar inertia. AI systems should apply physics
to their own bodies to the extent that they have them. Whether
they will need to use the analogy may depend on what knowledge
we choose to build in and what we will expect them to learn from
experience.
2. We can observe in a general way what we have been thinking
about and draw conclusions. For example, I have been thinking
about what to say about consciousness in this forum, and at present
it seems to be going rather well, so I'll continue composing
my comment rather than think about some specific aspect of
consciousness. I am, however, concerned that when I finish this
list I may have left out important aspects of consciousness that
we shall want in our programs. This kind of general observation
of the mental situation is important for making intellectual
plans, i.e. deciding what to think about. Very intelligent computer
programs will also need to examine what they have been thinking
about and reason about this information in order to decide whether
their intellectual goals are achievable. Unfortunately, AI isn't
ready for this yet, because we must solve some conceptual problems
first.
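As a toy illustration of the kind of self-observation I have in mind
(the representation, names, and thresholds below are assumptions made
for the example, not a proposal for solving those conceptual problems),
a program might keep a journal of its recent reasoning episodes and
consult it when deciding what to think about next:

    # Toy sketch of a program observing "in a general way" what it has
    # been thinking about.  All names and thresholds are illustrative.

    class Episode:
        def __init__(self, topic, effort, progress):
            self.topic = topic        # what was being thought about
            self.effort = effort      # resources spent on this episode
            self.progress = progress  # crude self-estimate, 0.0 to 1.0

    class Thinker:
        def __init__(self):
            self.journal = []         # record of recent reasoning episodes

        def note(self, topic, effort, progress):
            self.journal.append(Episode(topic, effort, progress))

        def review(self, topic):
            """Summarize how work on a topic has been going, as a basis
            for deciding whether to continue or to think about something else."""
            episodes = [e for e in self.journal if e.topic == topic]
            if not episodes:
                return "not yet considered"
            total_effort = sum(e.effort for e in episodes)
            latest = episodes[-1].progress
            if latest > 0.7:
                return "going rather well; keep at it"
            if total_effort > 10 and latest < 0.2:
                return "stuck; consider another topic"
            return "uncertain; continue for now"

    # Example: deciding whether to keep composing a comment.
    t = Thinker()
    t.note("comment on consciousness", effort=3, progress=0.8)
    print(t.review("comment on consciousness"))   # -> going rather well; keep at it

The point of the sketch is only that a record of the program's own
thinking is itself data the program can examine and reason about.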
3. We compare ourselves intellectually with other people.
The concepts we use to think about our own minds are mainly learned
from other people. As with information about our bodies, we infer
from what we observe about ourselves to the mental qualities of
other people, and we also learn about ourselves from what we
learn about others. In so far as programs are made similar to
people or other programs, they may also have to learn from interaction.
4. We have goals about our own mental functioning. We would
like to be smarter, nicer and more content. It seems to me that
programs should also have such meta-goals, but I don't see that
we need to make them the same as people's. Consider that many
people have the goal of being more rational, e.g. less driven
by impulses. When we find ourselves with circular preferences,
e.g. preferring A to B, B to C and C to A, we chide ourselves and
try to change. A computer program might well discover that its
heuristics give rise to circular preferences and try to modify
them in service of its grand goals. However, people are not
fully rational to begin with, because our heritage provides
direct connections between our disparate drives and the actions
that achieve the goals they generate; there seems to be no reason
to imitate all these features in computer programs.
Thus our programs should be able to compare the desirability
of future scenarios more readily than people do.
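As a small illustration of the circular-preference point (the
representation of preferences as pairs is an assumption made for the
example), a program could examine the pairwise preferences its
heuristics produce and flag any cycle for revision:

    # Toy sketch: detecting circular preferences such as A > B, B > C, C > A.
    # 'prefers' is a set of (x, y) pairs meaning "x is preferred to y".

    def find_preference_cycle(prefers):
        """Return a list of alternatives forming a preference cycle, or None."""
        graph = {}
        for x, y in prefers:
            graph.setdefault(x, []).append(y)

        def visit(node, path, seen):
            if node in path:                    # returned to a node on the current path
                return path[path.index(node):]  # the cycle itself
            if node in seen:                    # already explored without finding a cycle
                return None
            seen.add(node)
            for nxt in graph.get(node, []):
                cycle = visit(nxt, path + [node], seen)
                if cycle:
                    return cycle
            return None

        seen = set()
        for start in list(graph):
            cycle = visit(start, [], seen)
            if cycle:
                return cycle
        return None

    # The example from the text.
    prefs = {("A", "B"), ("B", "C"), ("C", "A")}
    print(find_preference_cycle(prefs))   # e.g. ['A', 'B', 'C']

A program that found such a cycle among its own preferences could then
go looking for the heuristics that produced it and try to revise them.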
5. Besides our direct observations of our own mental
states, we have a lot of general information about them. We
can predict whether problems will be easy or difficult for us
and whether hypothetical events will be pleasing or not.
Programs will require similar capabilities.
Finally, it seems to me that the discussion of consciousness
in this digest has been too much an outgrowth of the ordinary
traditional philosophical discussions of the subject. It hasn't
sufficiently been influenced by Dennett's "design stance". I'm
sure that more aspects of human consciousness than I have been
able to list will require analogs in robotic systems. We should
also be alert to provide forms of self-observation and reasoning
about the program's own mental state that go beyond those evolution
has given us.